Improve the User Experience in Your Mobile App by Using Low-light Enhancement Tech

#artificialintelligence

When visibility is low at night and you turn on your smartphone camera, the video preview is full of darkness, often showing even less than your own eyes can see. With the surge of real-time video apps, we have seen various video enhancement technologies (such as beautification or AR stickers) that make a video look better than it really is. You may wonder whether there is a technology that can make video look clearer than it actually is in low-light conditions. The answer is a definite yes. There are also several other scenarios with strong demand for low-light image enhancement technology, as follows.


Learning Behaviors with Uncertain Human Feedback

He, Xu, Chen, Haipeng, An, Bo

arXiv.org Artificial Intelligence

Human feedback is widely used to train agents in many domains. However, previous works rarely consider the uncertainty when humans provide feedback, especially in cases where the optimal actions are not obvious to the trainers. For example, the reward of a sub-optimal action can be stochastic and sometimes exceed that of the optimal action, which is common in games and real-world tasks. Trainers are therefore likely to give positive feedback to sub-optimal actions, negative feedback to optimal actions, or even no feedback at all in confusing situations. Existing works, which utilize the Expectation Maximization (EM) algorithm and treat the feedback model as hidden parameters, do not consider uncertainties in the learning environment and human feedback. To address this challenge, we introduce a novel feedback model that accounts for the uncertainty of human feedback. However, this makes the expectation step of the EM algorithm intractable. To this end, we propose a novel approximate EM algorithm, in which we approximate the expectation step with gradient descent. Experimental results in both synthetic scenarios and two real-world scenarios with human participants demonstrate the superior performance of our proposed approach.
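The core trick in the abstract, replacing an intractable closed-form E-step with gradient ascent on the evidence lower bound (ELBO), can be illustrated on a toy problem. The sketch below is not the paper's feedback model; it fits a two-component Gaussian mixture (known unit variance, uniform prior, an assumed setup for illustration) where the per-point responsibilities are found by gradient steps on responsibility logits rather than by the usual softmax formula:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 1-D data from two well-separated Gaussians
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

mu = np.array([-1.0, 1.0])       # component means to be learned
logits = np.zeros((x.size, 2))   # per-point responsibility logits (variational params)

def log_lik(x, mu):
    # log N(x | mu_k, 1) up to an additive constant
    return -0.5 * (x[:, None] - mu[None, :]) ** 2

for _ in range(50):
    f = log_lik(x, mu)
    # approximate E-step: gradient ascent on the ELBO w.r.t. the logits,
    # instead of the closed-form softmax responsibilities
    for _ in range(20):
        r = np.exp(logits - logits.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        g = f - np.log(r + 1e-12)
        # gradient of sum_k r_k (f_k - log r_k) through the softmax
        grad = r * (g - (r * g).sum(axis=1, keepdims=True))
        logits += 0.5 * grad
    # M-step: closed-form update of the means given the soft assignments
    r = np.exp(logits - logits.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.sort(mu))  # close to the true means [-2, 3]
```

Because the ELBO is concave in the responsibilities, the inner gradient loop converges to the same assignments the exact E-step would produce; the point of the approximation is that the gradient only needs the log-likelihood, not a closed-form posterior.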


One step closer to creating the world's first bionic EYE: Scientists 3D print a prototype 'eyeball'

Daily Mail - Science & tech

Scientists have taken another important step towards building the world's first bionic eye, which could give millions of blind people the chance to see again. In a world first, a team of researchers has built a three-dimensional artificial 'eyeball' capable of detecting changes in light levels. The bionic eye, which mimics the function of the retina in order to restore sight, works in tandem with an implant to convert the images it sees into electrical impulses for the retinal cells, which carry image signals back to the brain. By using 3D printing, scientists were able to produce the prototype much faster than previous efforts, sparking hope that this could become a viable commercial solution in future. However, no date has been given for when a final version will be ready for patients.


Artificial Intelligence Is Predicting Human Poverty From Space

#artificialintelligence

Getting aid to impoverished Africans is hard enough, what with blockades of bureaucracy and red tape. But in many African countries, bad data, or a lack of it, makes distributing funds even more troublesome. "Fighting poverty has always been this shining goal of the modern world," Neal Jean, a doctoral student in computer science at Stanford University's School of Engineering, told me. "It's the number one priority for the United Nations' 2030 Agenda for Sustainable Development, but the major challenge is that there's not enough reliable data. It's really hard to help impoverished people when you don't know where they are."


Environmental Sensing by Wearable Device for Indoor Activity and Location Estimation

Jin, Ming, Zou, Han, Weekly, Kevin, Jia, Ruoxi, Bayen, Alexandre M., Spanos, Costas J.

arXiv.org Machine Learning

We present results from a set of experiments in this pilot study to investigate the causal influence of user activity on various environmental parameters monitored by occupant-carried multi-purpose sensors. Hypotheses with respect to each type of measurement are verified, covering temperature, humidity, and light level collected during eight typical activities: sitting in a lab / cubicle, indoor walking / running, resting after physical activity, climbing stairs, taking elevators, and outdoor walking. Our main contribution is the development of features for activity and location recognition based on environmental measurements, which exploit location- and activity-specific characteristics and capture the trends resulting from the underlying physiological process. The features are statistically shown to have good separability and are also information-rich. Fusing environmental sensing with acceleration is shown to achieve classification accuracy as high as 99.13%. For building applications, this study motivates a sensor fusion paradigm for learning individualized activity, location, and environmental preferences for energy management and user comfort.
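The fusion idea can be sketched in a few lines of numpy. The example below uses invented feature values, not the paper's data, features, or classifier: a simple nearest-centroid classifier on an acceleration feature alone cannot separate activities that share a motion profile (sitting in a lab vs. a cubicle), whereas adding hypothetical environmental channels (temperature delta, light level) disambiguates them:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic per-window features for four activities (illustrative numbers only)
# columns: [accel magnitude, temperature delta, light level (lux)]
means = {
    "sit_lab":      [0.1, 0.0, 300.0],
    "sit_cubicle":  [0.1, 0.0, 150.0],  # same motion as the lab, dimmer light
    "walk_indoor":  [1.0, 0.5, 200.0],
    "walk_outdoor": [1.0, 0.5, 800.0],  # same motion, outdoor light level
}
labels = list(means)
X = np.vstack([rng.normal(means[a], [0.05, 0.1, 30.0], (100, 3)) for a in labels])
y = np.repeat(np.arange(4), 100)

def nearest_centroid_acc(X, y, cols):
    # standardize the chosen channels, fit class centroids on even-indexed
    # windows, evaluate on the odd-indexed ones
    Z = (X[:, cols] - X[:, cols].mean(0)) / X[:, cols].std(0)
    train = np.arange(X.shape[0]) % 2 == 0
    cent = np.vstack([Z[train & (y == k)].mean(0) for k in range(4)])
    pred = np.argmin(((Z[~train, None, :] - cent[None]) ** 2).sum(-1), axis=1)
    return (pred == y[~train]).mean()

acc_accel = nearest_centroid_acc(X, y, [0])        # acceleration channel only
acc_fused = nearest_centroid_acc(X, y, [0, 1, 2])  # fused with environment
print(acc_accel, acc_fused)
```

Acceleration alone tops out near chance on the confusable activity pairs, while the fused feature vector separates all four classes almost perfectly, which is the qualitative effect the abstract reports.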